Web Survey Bibliography
All social surveys suffer from different types of error, of which one of the most studied is non-response bias. Non-response bias is a systematic error that arises because individuals differ in their accessibility and propensity to participate in a survey, depending on their own characteristics as well as those of the survey itself. The extent of the problem depends heavily on the correlation between the response mechanism and key survey variables. Non-response bias is nevertheless difficult to measure or correct for, because relevant data are rarely available for the whole target population or sample. In this paper, non-response follow-up surveys are considered as a possible source of information about non-respondents. Non-response follow-ups, however, raise two methodological issues: they operate through a response mechanism of their own, which can itself introduce non-response bias, and they pose a problem of comparability of measurement, mainly because the survey design differs between the main survey and the follow-up. To detect possible bias, the variables included in non-response follow-ups must be related to the participation mechanism, yet not be sensitive to measurement effects arising from the different designs. Drawing on the accumulated experience of four similar non-response follow-ups, we studied which survey variables fulfill these conditions. We distinguished socio-demographic variables, which are measurement-invariant but less strongly correlated with non-response, from attitudinal variables, such as trust, social participation, or integration in the public sphere, which are more sensitive to measurement effects but potentially better suited to accounting for the non-response mechanism. Our results show that education level, work status, and living alone, as well as political interest, satisfaction with democracy, and trust in institutions, are pertinent variables to include in non-response follow-ups of general social surveys.
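As a rough illustration of the comparison the abstract describes, the sketch below contrasts the distribution of a candidate variable between main-survey respondents and non-response follow-up respondents. This is a minimal, hypothetical Python example, not the authors' procedure; the chosen variable (education level) and the toy data are assumptions for illustration only.

# Hypothetical sketch (not the authors' code): compare the distribution of a
# candidate variable between main-survey respondents and non-response
# follow-up respondents. The variable and data below are illustrative only.
from collections import Counter

def category_shares(answers):
    # Share of each answer category in a list of categorical responses.
    counts = Counter(answers)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

def max_share_difference(main_answers, followup_answers):
    # Largest absolute difference in category shares between the two groups.
    # For a measurement-invariant variable (e.g. education level) a large
    # difference points to non-response bias; for attitude items part of the
    # difference may instead reflect the change in survey design (mode effects).
    p_main = category_shares(main_answers)
    p_follow = category_shares(followup_answers)
    categories = set(p_main) | set(p_follow)
    return max(abs(p_main.get(c, 0.0) - p_follow.get(c, 0.0)) for c in categories)

# Toy data: education level of main-survey vs. follow-up respondents.
main = ["tertiary"] * 60 + ["secondary"] * 30 + ["primary"] * 10
follow = ["tertiary"] * 40 + ["secondary"] * 40 + ["primary"] * 20
print(max_share_difference(main, follow))  # ~0.2: main-survey respondents are more highly educated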
Web survey bibliography (317)
- Overview: Online Surveys; 2017; Vehovar, V.; Lozar Manfreda, K.
- Respondent mode choice in a smartphone survey; 2017; Conrad, F. G.; Schober, M. F.; Antoun, C.; Yan, H. Y.; Hupp, A.; Johnston, M.; Ehlen, P.; Vickers, L...
- Collecting Data from mHealth Users via SMS Surveys: A Case Study in Kenya; 2016; Johnson, D.
- Electronic and paper based data collection methods in library and information science research: A comparative...; 2016; Tella, A.
- Stable Relationships, Stable Participation? The Effects of Partnership Dissolution and Changes in Relationship...; 2016; Mueller, B.; Castiglioni, L.
- Identifying Pertinent Variables for Nonresponse Follow-Up Surveys. Lessons Learned from 4 Cases in Switzerland...; 2016; Vandenplas, C.; Joye, D.; Staehli, M. E.; Pollien, A.
- The 2013 Census Test: Piloting Methods to Reduce 2020 Census Costs; 2016; Walejko, G. K.; Miller, P. V.
- The Validity of Surveys: Online and Offline; 2016; Wiersma, W.
- Methods can matter: Where Web surveys produce different results than phone interviews; 2016; Keeter, S.
- Do Polls Still Work If People Don't Answer Their Phones?; 2016; Edwards-Levy, A.; Jackson, N. M.
- HUFFPOLLSTER: Why Reaching Latinos Is A Challenge For Pollsters; 2016; Jackson, N. M.; Edwards-Levy, A.; Velencia, J.
- Comprehension and engagement in survey interviews with virtual agents; 2016; Conrad, F. G.; Schober, M. F.; Jans, M.; Orlowski, R. A.; Nielsen, D.; Levenstein, R. M.
- An Overview of Mobile CATI Issues in Europe; 2015; Slavec, A.; Toninelli, D.
- Using Mobile Phones for High-Frequency Data Collection; 2015; Azevedo, J. P.; Ballivian, A.; Durbin, W.
- Mixed mode surveys; 2015; Burton, J.
- Two Are Better Than One: The Use of a Mixed-Mode Data Collection to Improve the Electoral Forecast; 2014; de Rada, V. D.; Pasadas del Amo, S.
- The impact of contact effort on mode-specific selection and measurement bias; 2014; Schouten, B.; van der Laan, J.; Cobben, F.
- How much is shorter CAWI questionnaire VS CATI questionnaire?; 2014; Bartoli, B.
- Advantages of a global multimodal print & digital readership survey; 2013; Cour, N.; Saint-Joanis, G.
- Relative Mode Effects on Data Quality in Mixed-Mode Surveys by an Instrumental Variable; 2013; Vannieuwenhuyze, J. T. A.; Revilla, M.
- A report on the Confirmit Market Research Software Survey 2013; 2013; Macer, T.; Wilson, S.
- Mode effect analysis and adjustment in a split-sample mixed-mode Web/CATI survey; 2013; Kolenikov, S.; Kennedy, C.
- Evaluating the left‐right dimension: Category Selection Probing conducted in an online access...; 2013; Huefken, V.
- Methodological, legal and technical perspectives on the feasibility of web survey paradata in German...; 2013; Sattelberger, S.
- Impact of mode design on reliability in longitudinal data; 2013; Cernat, A.
- Exploring patterns of academic usage: A Google Scholar based study of ESS, EVS, WVS and ISSP academic...; 2013; Malnar, B.
- Web questionnaires in official population surveys: Do's and don'ts. First experiments and impacts...; 2013; Blanke, K.
- Mode effects in Labour Force Surveys - do they really matter?; 2013; Koerner, T.
- Measuring the same concepts in several modes in the "BIBB/BAuA-Employee-Survey 2011/12"; 2013; Gensicke, M.; Tschersich, N.; Hartmann, J.
- What works? Getting the General Population To Go Online in a Mixed Mode Local Health Survey; 2013; Frigault, L.-R.; Azzou, S. A. K.; Molloy, E. J. K.; Ammarguellat, F.; Couture, M.; Gratton, J.
- Using Technology to Conduct Questionnaire Evaluations with Hard to Reach Populations; 2013; Ridolfo, H.; Ott, K.
- Mode Effects in a National Establishment Survey; 2013; Daley, K.; Phillips, B. T.
- Evaluating the Effect of a Non-Monetary Incentive in a Nationally Representative Mixed-Mode Establishment...; 2013; Sengupta, M.; Harris-Kojetin, L.; Hobbs, M.; Greene, A.
- Survey Reminder Method Experiment: An Examination of Cost Efficiency and Reminder Mode Salience in the...; 2013; Anderson, M.; Rogers, B.; CyBulski, K.; Hall, J. W.; Alderks, C. E.; Milazzo-Sayre, L.
- Experiences from a probability-based Internet panel: Sample, recruitment and participation; 2013; Scherpenzeel, A.
- An Evaluation of Internet Versus Paper-based Methods for Public Participation Geographic Information...; 2012; Pocewicz, A.; Nielsen-Pincus, M.; Brown, G.; Schnitzer, R.
- Using paradata to explore item-level response times in surveys; 2012; Couper, M. P.; Kreuter, F.
- Specialized Tools for Measuring Past Events; 2012; Belli, R. F.
- Modes of Data Collection; 2012; Tourangeau, R.
- Mode and non-response effects and their treatment; 2012; Chrysanthopoulos, S.; Georgostathi, A.
- “I think I know what you did last summer” Improving data quality in panel surveys; 2012; Lugtig, P. J.
- Using Text-to-Speech (TTS) for Audio-CASI; 2012; Couper, M. P.; Kirgis, N.; Buageila, S.; Berglund, P.
- Does Mode Matter? Initial Evidence from the German Longitudinal Election Study (GLES); 2012; Blumenstiel, J. E.; Rossmann, J.
- The Representativity of Web Surveys of the General Population compared to Traditional Modes and Mixed...; 2012; Klausch, L. T.; Schouten, B.; Hox, J.
- Effects of speeding on satisficing in Mixed-Mode Surveys; 2011; Bathelt, S.; Bauknecht, J.
- Web based CATI on Amazon Elastic Compute Cloud and VirtualBox using queXS; 2011; Zammit, A.
- Web/Cloud Based CATI Using queXS; 2011; Zammit, A.
- When Referring to Mode, Is Expressed Preference the Same as Reality?; 2011; Denk, K.
- Three Eras of Survey Research; 2011; Groves, R. M.
- Testing a single mode vs a mixed mode design; 2011; Laaksonen, S.